
    A remark on the multipliers on spaces of weak products of functions

    If $\mathcal{H}$ denotes a Hilbert space of analytic functions on a region $\Omega \subseteq \mathbb{C}^d$, then the weak product is defined by
    $$\mathcal{H}\odot\mathcal{H}=\left\{h=\sum_{n=1}^\infty f_n g_n : \sum_{n=1}^\infty \|f_n\|_{\mathcal{H}}\|g_n\|_{\mathcal{H}} <\infty\right\}.$$
    We prove that if $\mathcal{H}$ is a first order holomorphic Besov Hilbert space on the unit ball of $\mathbb{C}^d$, then the multiplier algebras of $\mathcal{H}$ and of $\mathcal{H}\odot\mathcal{H}$ coincide.

    Comment: v1: 6 pages. To appear Concr. Ope
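For context, the multiplier algebra referred to in this result is the standard one for a space of analytic functions (this statement of the definition is ours, not quoted from the paper):

```latex
% Multiplier algebra of a space \mathcal{H} of analytic functions on \Omega:
\[
\operatorname{Mult}(\mathcal{H})
  = \{\varphi : \Omega \to \mathbb{C} \;:\; \varphi f \in \mathcal{H}
     \text{ for all } f \in \mathcal{H}\},
\]
% so the theorem asserts the equality of sets
\[
\operatorname{Mult}(\mathcal{H}) = \operatorname{Mult}(\mathcal{H}\odot\mathcal{H}).
\]
```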

    The Structure of Inner Multipliers on Spaces with Complete Nevanlinna Pick Kernels

    We establish some multivariate generalizations of the Beurling-Lax-Halmos theorem.

    Comment: 21 page

    Main Memory Adaptive Indexing for Multi-core Systems

    Adaptive indexing is a concept that treats index creation in databases as a by-product of query processing, as opposed to traditional full index creation, where the indexing effort is performed up front, before answering any queries. Adaptive indexing has received a considerable amount of attention, and several algorithms have been proposed over the past few years, including a recent experimental study comparing a large number of existing methods. Until now, however, most adaptive indexing algorithms have been designed as single-threaded algorithms; yet with multi-core systems already well established, the idea of designing parallel algorithms for adaptive indexing is very natural. In this regard, only one parallel algorithm for adaptive indexing has recently appeared in the literature: the parallel version of standard cracking. In this paper we describe three alternative parallel algorithms for adaptive indexing, including a second variant of a parallel standard cracking algorithm. Additionally, we describe a hybrid parallel sorting algorithm and a NUMA-aware method based on sorting. We then thoroughly compare all these algorithms experimentally, along with a variant of a recently published parallel version of radix sort; parallel sorting algorithms serve as a realistic baseline for multi-threaded adaptive indexing techniques. In total we experimentally compare seven parallel algorithms, and we extensively profile all of them. The initial set of experiments considered in this paper indicates that our parallel algorithms significantly improve over previously known ones. Our results suggest that, although adaptive indexing algorithms are a good design choice in single-threaded environments, the rules change considerably in the parallel case: in future highly parallel environments, sorting algorithms could be serious alternatives to adaptive indexing.

    Comment: 26 pages, 7 figure
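To make the underlying idea concrete, here is a minimal single-threaded sketch of standard cracking, the baseline technique the abstract builds on. This is an illustration of the general concept, not code from the paper; the class and method names are our own.

```python
import bisect

class CrackerColumn:
    """Sketch of 'standard cracking': each range query partitions
    (cracks) the column a little further as a side effect, so the
    index is built incrementally by the queries themselves."""

    def __init__(self, values):
        self.data = list(values)
        # Recorded piece boundaries: (pivot, index) pairs, sorted by
        # pivot, meaning data[:index] < pivot <= data[index:].
        self.cracks = []

    def _crack(self, pivot):
        """Partition the piece containing `pivot` and record the
        boundary; return the index of the first element >= pivot."""
        pivots = [p for p, _ in self.cracks]
        i = bisect.bisect_left(pivots, pivot)
        if i < len(self.cracks) and self.cracks[i][0] == pivot:
            return self.cracks[i][1]          # already cracked here
        lo = self.cracks[i - 1][1] if i > 0 else 0
        hi = self.cracks[i][1] if i < len(self.cracks) else len(self.data)
        piece = self.data[lo:hi]
        left = [x for x in piece if x < pivot]
        right = [x for x in piece if x >= pivot]
        self.data[lo:hi] = left + right       # in-place partition
        idx = lo + len(left)
        self.cracks.insert(i, (pivot, idx))
        return idx

    def range_query(self, low, high):
        """Return all values v with low <= v < high, cracking the
        column at both bounds as a by-product."""
        a = self._crack(low)
        b = self._crack(high)
        return self.data[a:b]
```

The first query over a cold column pays a partitioning cost; later queries that fall inside already-cracked pieces touch less and less data, which is the "indexing as a by-product of query processing" trade-off the abstract describes.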

    Only Aggressive Elephants are Fast Elephants

    Yellow elephants are slow. A major reason is that they consume their inputs entirely before responding to an elephant rider's orders. Some clever riders have trained their yellow elephants to only consume parts of the inputs before responding. However, the teaching time to make an elephant do that is high. So high that the teaching lessons often do not pay off. We take a different approach. We make elephants aggressive; only this will make them very fast. We propose HAIL (Hadoop Aggressive Indexing Library), an enhancement of HDFS and Hadoop MapReduce that dramatically improves runtimes of several classes of MapReduce jobs. HAIL changes the upload pipeline of HDFS in order to create different clustered indexes on each data block replica. An interesting feature of HAIL is that we typically create a win-win situation: we improve both data upload to HDFS and the runtime of the actual Hadoop MapReduce job. In terms of data upload, HAIL improves over HDFS by up to 60% with the default replication factor of three. In terms of query execution, we demonstrate that HAIL runs up to 68x faster than Hadoop. In our experiments, we use six clusters, including physical and EC2 clusters of up to 100 nodes. A series of scalability experiments also demonstrates the superiority of HAIL.

    Comment: VLDB201
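The core idea, different clustered indexes on each block replica, can be sketched in a few lines. This is a hypothetical illustration of the concept, not HAIL's actual HDFS/Hadoop implementation; all names here are ours.

```python
import bisect

def upload_block(rows, sort_keys):
    """At upload time, create one clustered replica per sort key
    (so replication factor = len(sort_keys)); each replica is the
    same data sorted on a different attribute."""
    return {key: sorted(rows, key=lambda r: r[key]) for key in sort_keys}

def range_scan(replicas, key, low, high):
    """Route a range predicate on `key` to the replica clustered on
    that attribute, enabling binary search instead of a full scan."""
    replica = replicas[key]
    col = [r[key] for r in replica]
    a = bisect.bisect_left(col, low)    # first row with col >= low
    b = bisect.bisect_left(col, high)   # first row with col >= high
    return replica[a:b]
```

Because the replicas would have been written anyway for fault tolerance, clustering each one differently adds index coverage for several attributes at little extra storage cost, which is the "win-win" the abstract refers to.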

    Weak products of complete Pick spaces

    Let $\mathcal{H}$ be the Drury-Arveson or Dirichlet space of the unit ball of $\mathbb{C}^d$. The weak product $\mathcal{H}\odot\mathcal{H}$ of $\mathcal{H}$ is the collection of all functions $h$ that can be written as $h=\sum_{n=1}^\infty f_n g_n$, where $\sum_{n=1}^\infty \|f_n\|\|g_n\|<\infty$. We show that $\mathcal{H}\odot\mathcal{H}$ is contained in the Smirnov class of $\mathcal{H}$, i.e. every function in $\mathcal{H}\odot\mathcal{H}$ is a quotient of two multipliers of $\mathcal{H}$, where the function in the denominator can be chosen to be cyclic in $\mathcal{H}$. As a consequence we show that the map $\mathcal{N} \to \operatorname{clos}_{\mathcal{H}\odot\mathcal{H}} \mathcal{N}$ establishes a one-to-one and onto correspondence between the multiplier invariant subspaces of $\mathcal{H}$ and of $\mathcal{H}\odot\mathcal{H}$. The results hold for many weighted Besov spaces $\mathcal{H}$ in the unit ball of $\mathbb{C}^d$, provided the reproducing kernel has the complete Pick property. One of our main technical lemmas states that for weighted Besov spaces $\mathcal{H}$ that satisfy what we call the multiplier inclusion condition, any bounded column multiplication operator $\mathcal{H} \to \oplus_{n=1}^\infty \mathcal{H}$ induces a bounded row multiplication operator $\oplus_{n=1}^\infty \mathcal{H} \to \mathcal{H}$. For the Drury-Arveson space $H^2_d$ this leads to an alternate proof of the characterization of interpolating sequences in terms of weak separation and Carleson measure conditions.

    Comment: minor change
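In standard notation (the symbols $C_\varphi$ and $R_\varphi$ are ours, not the paper's), the column and row operators appearing in the technical lemma act as follows, for a sequence of multipliers $\varphi_n$ of $\mathcal{H}$:

```latex
% Column multiplication operator: one function in, a column of products out
\[
C_\varphi : \mathcal{H} \to \bigoplus_{n=1}^\infty \mathcal{H},
\qquad C_\varphi f = (\varphi_1 f,\, \varphi_2 f,\, \dots),
\]
% Row multiplication operator: a column of functions in, one sum out
\[
R_\varphi : \bigoplus_{n=1}^\infty \mathcal{H} \to \mathcal{H},
\qquad R_\varphi (f_1, f_2, \dots) = \sum_{n=1}^\infty \varphi_n f_n .
\]
```

The lemma asserts that, under the multiplier inclusion condition, boundedness of the column operator $C_\varphi$ implies boundedness of the row operator $R_\varphi$.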